    Calibration of wind speed ensemble forecasts for power generation

    In the last decades, wind power has become the second largest energy source in the EU, covering 16% of its electricity demand. However, due to its volatility, accurate short-range wind power predictions are required for the successful integration of wind energy into the electrical grid. Accurate predictions of wind power require accurate hub-height wind speed forecasts, where the state-of-the-art method is the probabilistic approach based on ensemble forecasts obtained from multiple runs of numerical weather prediction models. Nonetheless, ensemble forecasts are often uncalibrated and might also be biased, and thus require some form of post-processing to improve their predictive performance. We propose a novel, flexible machine learning approach for calibrating wind speed ensemble forecasts, which results in a truncated normal predictive distribution. In a case study based on 100 m wind speed forecasts produced by the operational ensemble prediction system of the Hungarian Meteorological Service, the forecast skill of this method is compared with the predictive performance of three different ensemble model output statistics approaches and of the raw ensemble forecasts. We show that, compared with the raw ensemble, post-processing always improves the calibration of probabilistic forecasts and the accuracy of point forecasts, and that among the four competing methods the novel machine learning-based approach results in the best overall performance. Comment: 15 pages, 5 figures
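
    As a rough illustration of the kind of post-processing described above, the sketch below builds a zero-truncated normal predictive distribution whose location and scale are linked to the ensemble mean and variance, in the spirit of ensemble model output statistics rather than the paper's machine learning model. The coefficients a, b, c and d are placeholders that would normally be estimated from training data, e.g. by minimizing the CRPS or the logarithmic score.

```python
# Minimal sketch, not the paper's model: an EMOS-style truncated normal
# predictive distribution for wind speed, with location and scale linked to
# the ensemble mean and variance. Coefficients a, b, c, d are placeholders.
import numpy as np
from scipy.stats import truncnorm

def truncated_normal_forecast(ensemble, a=0.1, b=1.0, c=0.2, d=1.0):
    """Return a frozen scipy distribution truncated at zero from below."""
    ens = np.asarray(ensemble, dtype=float)
    mu = a + b * ens.mean()             # location linked to the ensemble mean
    sigma = np.sqrt(c + d * ens.var())  # scale linked to the ensemble spread
    alpha = (0.0 - mu) / sigma          # standardized lower truncation point
    return truncnorm(alpha, np.inf, loc=mu, scale=sigma)

# Toy 11-member ensemble of 100 m wind speed forecasts (m/s)
ens = [6.2, 7.1, 5.8, 6.9, 7.4, 6.5, 5.9, 7.0, 6.7, 6.3, 7.2]
dist = truncated_normal_forecast(ens)
print("median forecast:", round(float(dist.ppf(0.5)), 2))
print("P(wind speed > 8 m/s):", round(float(1 - dist.cdf(8.0)), 3))
```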

    Parametric model for post-processing visibility ensemble forecasts

    Despite the continuous development of the different operational ensemble prediction systems over the past decades, ensemble forecasts may still suffer from a lack of calibration and/or display systematic bias, and thus require some post-processing to improve their forecast skill. Here we focus on visibility, a quantity that plays a crucial role e.g. in aviation, road safety and ship navigation, and propose a parametric model whose predictive distribution is a mixture of a gamma and a truncated normal distribution, both right-censored at the maximal reported visibility value. The new model is evaluated in two case studies based on visibility ensemble forecasts of the European Centre for Medium-Range Weather Forecasts, covering two distinct domains in Central and Western Europe and two different time periods. The results of the case studies indicate that climatology is substantially superior to the raw ensemble; nevertheless, the forecast skill can be further improved by post-processing, at least for short lead times. Moreover, the proposed mixture model consistently outperforms the Bayesian model averaging approach used as a reference post-processing technique. Comment: 26 pages, 14 figures, 2 tables
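
    A minimal sketch of the distribution family proposed above, under illustrative assumptions: the CDF of a mixture of a gamma and a zero-truncated normal distribution, right-censored at the maximal reported visibility v_max. All parameter values below, including v_max and the mixture weight, are placeholders rather than fitted values from the paper.

```python
# Minimal sketch with placeholder parameters: CDF of a gamma / truncated normal
# mixture, right-censored at the maximal reported visibility v_max (in metres).
import numpy as np
from scipy.stats import gamma, truncnorm

def censored_mixture_cdf(x, w, shape, scale, mu, sigma, v_max):
    """CDF of w*Gamma(shape, scale) + (1-w)*TN(mu, sigma; lower=0), censored at v_max."""
    x = np.asarray(x, dtype=float)
    alpha = (0.0 - mu) / sigma                       # lower truncation of the normal part
    tn = truncnorm(alpha, np.inf, loc=mu, scale=sigma)
    cdf = w * gamma.cdf(x, a=shape, scale=scale) + (1 - w) * tn.cdf(x)
    return np.where(x >= v_max, 1.0, cdf)            # censoring: point mass at v_max

# Example: probability of visibility below 1 km with illustrative parameters
print(censored_mixture_cdf(1000.0, w=0.4, shape=2.0, scale=3000.0,
                           mu=20000.0, sigma=15000.0, v_max=70000.0))
```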

    A two-step machine learning approach to statistical post-processing of weather forecasts for power generation

    By the end of 2021, the renewable energy share of the global electricity capacity had reached 38.3%, and new installations are dominated by wind and solar energy, showing global increases of 12.7% and 18.5%, respectively. However, both wind and photovoltaic energy sources are highly volatile, making planning difficult for grid operators, so accurate forecasts of the corresponding weather variables are essential for reliable electricity predictions. The most advanced approach in weather prediction is the ensemble method, which opens the door to probabilistic forecasting; however, ensemble forecasts are often underdispersive and subject to systematic bias. Hence, they require some form of statistical post-processing, where parametric models provide full predictive distributions of the weather variables at hand. We propose a general two-step machine learning-based approach to calibrating ensemble weather forecasts, where in the first step improved point forecasts are generated, which then, together with various ensemble statistics, serve as input features of a neural network estimating the parameters of the predictive distribution. In two case studies based on 100 m wind speed and global horizontal irradiance forecasts of the operational ensemble prediction system of the Hungarian Meteorological Service, the predictive performance of this novel method is compared with the forecast skill of the raw ensemble and of state-of-the-art parametric approaches. Both case studies confirm that, at least up to 48 h ahead, statistical post-processing substantially improves the predictive performance of the raw ensemble for all considered forecast horizons. The investigated variants of the proposed two-step method outperform their competitors in skill, and the suggested new approach is well applicable to different weather quantities and to a fair range of predictive distributions. Comment: 25 pages, 12 figures, 4 tables
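
    The sketch below illustrates the two-step structure described above under simplifying assumptions: step one learns an improved point forecast from ensemble statistics, and step two maps that point forecast, together with the ensemble mean and spread, to the parameters of a zero-truncated normal predictive distribution. The paper uses a neural network in the second step; here a simple linear parameterization fitted by maximum likelihood on synthetic data stands in for it, so everything below is purely illustrative.

```python
# Illustrative two-step sketch on synthetic data; not the paper's network.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import truncnorm
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, m = 500, 11                                             # forecast cases, ensemble members
truth = rng.gamma(4.0, 2.0, size=n)                        # synthetic observed wind speeds
ens = truth[:, None] + rng.normal(0.5, 1.5, size=(n, m))   # biased, noisy synthetic ensemble

stats = np.column_stack([ens.mean(1), ens.std(1), ens.min(1), ens.max(1)])

# Step 1: improved point forecast from ensemble statistics
point = GradientBoostingRegressor().fit(stats, truth).predict(stats)

# Step 2: truncated normal parameters from (point forecast, ensemble mean, spread)
feats = np.column_stack([point, ens.mean(1), ens.std(1)])

def neg_log_lik(theta):
    mu = theta[0] + feats @ theta[1:4]
    sigma = np.exp(theta[4] + feats @ theta[5:8])          # keeps the scale positive
    a = (0.0 - mu) / sigma                                 # truncation at zero from below
    return -truncnorm.logpdf(truth, a, np.inf, loc=mu, scale=sigma).sum()

res = minimize(neg_log_lik, np.zeros(8), method="Nelder-Mead",
               options={"maxiter": 5000})
print("fitted link coefficients:", np.round(res.x, 3))
```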

    A multi-dimensional model to the digital maturity life-cycle for SMEs

    As companies try to maintain and strengthen their competitive advantage, they should be aware of their level of digital maturity. The study aims to present a methodology that helps to determine the position of a small and medium-sized enterprise in the digital maturity life-cycle. This is done on the basis of maturity and digital maturity models and company growth theories. A number of studies and models have been prepared to determine digital maturity on the basis of various sectoral criteria, but these are all one-dimensional. The study therefore proposes a multi-dimensional model for determining the digital maturity life-cycle of small and medium-sized enterprises that takes into account companies’ digital maturity, the IT intensity of various sectors and their organizational characteristics. The model defines five maturity levels together with their relevant characteristics, classified into three levels in terms of data-information. It can help small and medium-sized enterprises make more accurate decisions about areas in need of development.

    Stochastic Simulation of Droplet Interactions in Suspension Polymerization of Vinyl Chloride

    In this paper, a population balance-based mathematical model is presented for describing the suspension polymerization of vinyl chloride. The properties of the polymer product and the behaviour of the stirred batch polymerization reactor are investigated by simulation. A two-phase kinetics model of free radical polymerization is used, and the heat balance is also included in the model. Besides the coalescence and breakage phenomena, collision-induced interchanges of species and heat between the droplets are taken into account, forming a complex three-scale system. The motion of droplets in the physical space of the polymerization reactor, as well as the breakage, coalescence and coalescence/redispersion processes, are simulated using a coupled continuous-time Monte Carlo method.
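
    As a highly simplified illustration of the Monte Carlo treatment of the droplet population described above, the sketch below simulates binary coalescence and breakage events over a set of droplet volumes with illustrative, equal rate constants. It omits the polymerization kinetics, the heat balance and the collision-induced exchange of species and heat, and uses a plain event-by-event scheme rather than the coupled continuous-time Monte Carlo method of the paper.

```python
# Toy Monte Carlo simulation of droplet coalescence and breakage; rates are illustrative.
import numpy as np

rng = np.random.default_rng(1)
volumes = list(rng.uniform(0.5, 1.5, size=200))   # initial droplet volumes (arbitrary units)
k_coal, k_break = 0.5, 0.5                        # illustrative, equal rate constants

for _ in range(1000):
    if rng.random() < k_coal / (k_coal + k_break) and len(volumes) > 1:
        # coalescence: merge two randomly chosen droplets into one
        i, j = rng.choice(len(volumes), size=2, replace=False)
        merged = volumes[i] + volumes[j]
        for idx in sorted((i, j), reverse=True):
            volumes.pop(idx)
        volumes.append(merged)
    else:
        # breakage: split a randomly chosen droplet into two unequal daughters
        parent = volumes.pop(rng.integers(len(volumes)))
        f = rng.uniform(0.2, 0.8)                 # random daughter volume fraction
        volumes.extend([f * parent, (1 - f) * parent])

volumes = np.array(volumes)
print("droplets:", volumes.size, "mean volume:", round(float(volumes.mean()), 3))
```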

    Models of Scholarly Communication and Citation Analysis

    Informetric/bibliometric analyses have to a large extent relied on the assumption that research is essentially cumulative in nature, which is not least visible in the rationale for using citation analyses to assess the quality of research. However, when reviewing both the theoretical literature on how research is organized and studies analyzing the structures of research fields through informetric mapping methods, it becomes clear that cumulative organization is just one of several ways of organizing research and scholarly communication. Consequently, the way the role of citations is interpreted in research assessment has to be revised. Based on the review of previous research, this paper suggests a model for categorizing different modes of scholarly communication. We test this model through three different kinds of semantic labelling analyses on abstracts and research papers from the fields of biomedicine, computer science and educational research. The proposed model suggests three main categories of scholarly communication: cumulative, negotiating and distinctive; and when matching the labels identified in the semantic analysis to the three categories, we find evidence of three different ways of communicating research that supports the model.

    Visual analytics of academic writing

    This paper describes a novel analytics dashboard which visualises the key features of scholarly documents. The dashboard aggregates the salient sentences of scholarly papers, their rhetorical types and the key concepts mentioned within these sentences. These features are extracted from papers using a Natural Language Processing (NLP) technology called the Xerox Incremental Parser (XIP). The XIP Dashboard is a set of visual analytics modules based on the XIP output. In this paper, we briefly introduce the XIP technology and demonstrate an example visualisation from the XIP Dashboard.